Timing and Power Analysis

By Peggy Aycinena
Integrated System Design
Posted 06/18/01, 03:44:02 PM EDT

In the wicked world of design closure, power and timing analysis are evil twins separated at birth. They plague the efforts of designers to complete a design within a prescribed set of specifications and time-to-market demands. Although there are many interesting options available from various vendors today, power and timing analysis continue to torment those who try to ignore them.

The two conundrums may be evil twins, but not everyone sees them as born equal. Dataquest's chief EDA analyst Gary Smith says, "Where timing was the problem of the '90s, power has become the problem of the '00s. We are presently working on the Technology Roadmap and it has become obvious that power is a critical path issue, if we wish to continue along the Moore's Law curve."

According to An-Chang Deng, president and COO of Nassda Corp. (Santa Clara, CA), there are a number of forces at work in power and timing that complicate mastery of the two design parameters -- shrinking process technologies and the inversely proportional increase in circuit size, combined with the dramatic increase in data associated with the analysis. Deng says, "The challenging issue, obviously, is the volume of parasitic data. Typically, as the circuit size gets bigger, the number of data bits [associated with parasitic data] is seen as much higher than the number of transistors in the circuit. [Therefore], the big crisis is how to handle the data -- up to 1 gigabyte. The existing 32-bit machines [are inadequate]."
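
A back-of-envelope estimate makes Deng's point concrete. The figures in the sketch below are illustrative assumptions, not numbers from Nassda; they simply show how quickly extracted parasitics can outgrow both the transistor count and the usable address space of a 32-bit machine.

# Illustrative estimate only -- every constant here is an assumption.
TRANSISTORS = 3_000_000          # a "fast-Spice class" circuit
NETS_PER_TRANSISTOR = 1.5        # rough ratio of signal nets to devices (assumption)
RC_ELEMENTS_PER_NET = 40         # coupled RC segments per net after extraction (assumption)
BYTES_PER_ELEMENT = 32           # node indices, R/C values, bookkeeping (assumption)

nets = TRANSISTORS * NETS_PER_TRANSISTOR
parasitic_elements = nets * RC_ELEMENTS_PER_NET
parasitic_bytes = parasitic_elements * BYTES_PER_ELEMENT

print(f"parasitic elements: {parasitic_elements:,.0f}")
print(f"parasitic data:     {parasitic_bytes / 2**30:.1f} GiB")
# With these assumed figures the parasitic netlist alone reaches several GiB --
# well past what a 32-bit process can comfortably address.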

Dale Pollek, vice president of marketing at Celestry Design Technologies, Inc. (San Jose, CA) says: "At 0.13 µm, the target doesn't just get smaller; new things kick up. If you don't silicon calibrate at 0.13 µm to be silicon accurate, your models will be off by 40 to 50 percent." He argues that the EDA vendors have become too removed from the physics of the silicon and that they will ultimately have to pay attention. "A 30-million transistor integrated circuit has [many] kilometers of interconnect. [Keeping track of the parasitics] is like unwinding a golf ball. There's lots going on in there," according to Pollek.

Paying attention to the physics of the situation is, according to many, the critical part of any strategy that hopes to conquer the problems associated with power and timing analysis. As process technologies approach 0.13 µm, 0.10 µm, and below, the inadvertent interactions between devices and adjacent interconnects become debilitating unless they are accounted for in the device modeling.
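
A rough, hypothetical illustration of why those interactions matter: the sketch below uses a first-order Elmore-style RC delay with assumed resistance and capacitance values (none taken from the article) to show how far a delay estimate shifts once coupling to a neighboring wire, and its Miller effect, is included.

# Simplified sketch with assumed values -- not a calibrated deep-submicron model.
R_WIRE = 500.0        # ohms, driver plus wire resistance (assumption)
C_GROUND = 100e-15    # farads, capacitance to ground planes (assumption)
C_COUPLE = 150e-15    # farads, capacitance to the neighboring wire (assumption)

def elmore_delay(r, c):
    """First-order (Elmore-style) RC delay estimate, in seconds."""
    return 0.69 * r * c

no_coupling_model = elmore_delay(R_WIRE, C_GROUND)                   # coupling ignored
quiet_neighbor    = elmore_delay(R_WIRE, C_GROUND + C_COUPLE)        # neighbor held still
opposing_neighbor = elmore_delay(R_WIRE, C_GROUND + 2.0 * C_COUPLE)  # Miller factor of 2

print(f"model without coupling: {no_coupling_model * 1e12:6.1f} ps")
print(f"quiet neighbor:         {quiet_neighbor * 1e12:6.1f} ps")
print(f"opposing switching:     {opposing_neighbor * 1e12:6.1f} ps")
# Ignoring the coupling term understates delay by more than half in this toy case,
# on the order of the modeling error Pollek warns about.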

Lisa Lipscom, vice president of marketing for Nurlogic Design, Inc. (San Diego, CA), agrees, and says that despite the physical challenges, there's a lot of interest in the performance that people can get out of 0.13-µm technologies and below. "At 0.13-µm technology, people want to know what advantages they can get," she says. With so much real estate now available on-chip, those advantages are only now beginning to be understood.

Pollek quotes Celestry board member Wayne Dye: "Timing is everything." Pollek adds: "[That's true] -- in the sense of the cliche and in the sense of getting the chip out. [It's very expensive to have] a half-million dollars of tooling if you don't get the silicon out. But, if you don't get the right data in, you're dead."

Meanwhile, Dataquest's Smith has been on record as a proponent of hierarchical design for a long time. Nassda's Deng agrees with him: "What are you going to do? In our view, this is a methodology issue. You do the flat extraction; you come up with every single component. The number and size of circuits is getting bigger. You simply cannot do flat. Our tools are based on hierarchical technology. This is the best solution for the large volume. We come up with back-annotation of the parasitics into the hierarchical [version of the design]."

Graham Bell, director of marketing at Nassda, says, "Hierarchical design needs to have a kind of sharing of images. [Consider a design] with 100 million components -- in a hierarchical [strategy], you can't do storage for every single component. It's more than just a case of having to solve the differential equations associated with flat design. After you solve the sub-block [in hierarchical design], you need to solve the parent. Then you need to solve the grandparent. Recursion is the key."
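
The following toy sketch, which is not Nassda's implementation, illustrates the idea Bell describes: unique sub-blocks are stored once and shared across instances, and the analysis recurses child-first, so a cell repeated thousands of times is solved only once before its parent and then its grandparent.

# Toy sketch of hierarchical, bottom-up analysis with shared sub-block "images".
class Cell:
    def __init__(self, name, local_devices, children=()):
        self.name = name
        self.local_devices = local_devices   # devices owned by this level of hierarchy
        self.children = children             # shared Cell objects, stored only once

def solve(cell, solved=None):
    """Recursively 'solve' unique cells child-first; each cell is solved only once."""
    if solved is None:
        solved = {}
    if cell.name in solved:
        return solved                        # image already shared and solved
    for child in cell.children:
        solve(child, solved)
    solved[cell.name] = f"analysis of {cell.local_devices} devices"
    return solved

# A leaf instantiated hundreds of thousands of times is still analyzed once:
bitcell = Cell("bitcell", 6)
column  = Cell("column", 20, children=(bitcell,) * 256)
array   = Cell("array", 100, children=(column,) * 512)
print(solve(array).keys())   # dict_keys(['bitcell', 'column', 'array'])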

Deng is willing to answer an interesting question: among the competition, who has provided the best timing and power analysis tools? Initially, he says, it was the "Quad design folks," who were then picked up by Synopsys. Synopsys introduced PrimeTime and eventually, in acquiring Quad, reduced the offerings to a single tool. He adds, "On the static side, Synopsys has done well." On the dynamic side, simulators have been prevalent.

At the transistor level, fast Spice technologies have abounded, with Nassda now in that area, according to Deng. Traditional Spice can handle from 1,000 to 10,000 transistors and, in some cases, from 50,000 to 100,000. The fast Spice tools from Avanti and Synopsys can handle the 1 million to 3 million transistor range. These technologies were developed over the last five years.

But, according to Deng, "They've kind of run their course in the innovation that they present. It turns out that, with the second generation of fast-Spice technologies, the analysis grows linearly with the circuit size." He argues that a new, third generation of tools is needed to deal with the enormous number of on-chip devices and the incredible metrics associated with on-chip circuitry.

Pollek has the last word on the staggering metrics associated with the emerging world of ultra-deep-submicron integrated circuits and systems on a chip (SOC): "[We're] looking at a 1 billion-transistor DRAM. [True], it's not the same [complexity] as a system on a chip, but [it's a hint that] designers and their tools need to be flexible to cope!"
